This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit. The proposed scheme tracks closely the close-to-optimal performance achieved by difference-of-convex programming, and significantly outperforms the well-known benchmark algorithms based on convex relaxation. Compared to the state-of-the-art proposals, the proposed scheme poses a drastically lower computational load on the system: for $K$ devices and $N$ antennas at the parameter server, the benchmark complexity scales with $\left(N^2 + K\right)^3 + N^6$, while the complexity of the proposed scheme scales with $K^p N^q$ for some $0 < p, q \leq 2$. The efficiency of the proposed scheme is confirmed via numerical experiments on the CIFAR-10 dataset.
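A minimal sketch of the greedy, matching-pursuit-style selection pattern the abstract refers to: devices are added one at a time, each step picking the candidate that most improves a scheduling objective. The objective function and stopping rule below are placeholders, not the paper's actual criterion; the low per-step cost (one objective evaluation per remaining device) is what yields polynomial $K^p N^q$-type scaling.

```python
import numpy as np

def greedy_device_scheduling(H, objective, max_devices):
    """Matching-pursuit-style greedy device selection (illustrative sketch).

    H           : (K, N) complex array of device channel vectors
                  (K devices, N antennas at the parameter server).
    objective   : callable mapping a list of device indices to a scalar
                  to minimize (e.g., an aggregation-error surrogate);
                  its exact form is paper-specific and assumed here.
    max_devices : schedule at most this many devices.
    """
    K = H.shape[0]
    selected = []
    remaining = set(range(K))
    while remaining and len(selected) < max_devices:
        # Greedy step: add the single device that most improves the objective.
        best_k = min(remaining, key=lambda k: objective(selected + [k]))
        if selected and objective(selected + [best_k]) >= objective(selected):
            break  # no remaining device improves the objective; stop early
        selected.append(best_k)
        remaining.remove(best_k)
    return selected

# Example with a toy surrogate (not the paper's criterion): prefer devices
# with strong channels by minimizing the negative total channel gain.
# H = (np.random.randn(20, 4) + 1j * np.random.randn(20, 4)) / np.sqrt(2)
# sched = greedy_device_scheduling(
#     H, lambda S: -sum(np.linalg.norm(H[k]) for k in S), max_devices=8)
```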
Annotating images with pixel-wise labels is a time-consuming and costly process. Recently, DatasetGAN showcased a promising alternative: synthesizing a large labeled dataset via a generative adversarial network (GAN) by exploiting a small set of manually labeled, GAN-generated images. Here, we scale DatasetGAN to ImageNet's scale of class diversity. We take image samples from a class-conditional generative model trained on ImageNet and manually annotate 5 images per class, for all 1k classes. By training an effective feature segmentation architecture on top of BigGAN, we turn BigGAN into a labeled dataset generator. We further show that VQGAN can similarly serve as a dataset generator, leveraging the already annotated data. We create a new ImageNet benchmark by labeling an additional set of 8k real images and evaluating segmentation performance in a variety of settings. Through an extensive ablation study, we show big gains from leveraging a large generated dataset to train different supervised and self-supervised backbone models on pixel-wise tasks. Furthermore, we demonstrate that pre-training on our synthesized datasets improves over standard ImageNet pre-training on several downstream datasets, such as PASCAL-VOC, MS-COCO, Cityscapes, and chest X-ray, as well as across tasks (detection, segmentation). Our benchmark will be made public and will maintain a leaderboard for this challenging task. Project page: https://nv-tlabs.github.io/big-datasetgan/
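A minimal sketch of the dataset-generator idea described above: a small per-pixel classifier is trained on top of the generator's features, after which fresh GAN samples can be labeled automatically. The `generator` interface here (returning images together with per-pixel features) is hypothetical; the real BigGAN code exposes intermediate features differently, and the paper's segmentation head is more elaborate.

```python
import torch
import torch.nn as nn

class FeatureSegHead(nn.Module):
    """Small per-pixel classifier over generator features
    (a stand-in for the paper's feature segmentation architecture)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, feats):
        return self.net(feats)  # (B, num_classes, H, W) logits

@torch.no_grad()
def synthesize_labeled_batch(generator, seg_head, batch, z_dim, class_id,
                             device="cuda"):
    """Draw class-conditional GAN samples and label them with the trained head.

    `generator(z, y) -> (images, feats)` is an assumed interface: it must
    return generated images plus spatially aligned per-pixel features.
    """
    z = torch.randn(batch, z_dim, device=device)
    y = torch.full((batch,), class_id, dtype=torch.long, device=device)
    images, feats = generator(z, y)         # feats: (B, feat_dim, H, W)
    labels = seg_head(feats).argmax(dim=1)  # per-pixel class predictions
    return images, labels
```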
Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset, ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. In total there are 25k images of complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both benchmarks and re-implement the state-of-the-art models as open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
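A minimal sketch of the synchronized batch normalization setup the abstract highlights, using PyTorch's built-in `nn.SyncBatchNorm` conversion (this is standard PyTorch usage, not the paper's original training code). Synchronizing BN statistics across GPUs gives the larger effective batch size reported as crucial; the model choice below is an arbitrary stand-in, with 150 classes matching the ADE20K scene-parsing benchmark.

```python
import torch.nn as nn
import torchvision

# Any BatchNorm-based segmentation network works here; FCN-ResNet50 is
# just a convenient torchvision stand-in for the paper's models.
model = torchvision.models.segmentation.fcn_resnet50(num_classes=150)

# Replace every BatchNorm layer with SyncBatchNorm so that batch statistics
# are aggregated across all GPUs rather than computed per device.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# SyncBatchNorm only takes effect inside a torch.distributed process group:
# torch.distributed.init_process_group("nccl", ...)
# model = nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[rank])
```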